The urgency of the ocean plastic pollution problem is beyond question, and now may be our best opportunity to join this fight before it's too late. Researchers all over the world have been investigating how to automatically collect plastic waste in the ocean, with some papers exploring the use of autonomous underwater vehicles for trash detection and removal [1].
But this brings up the question: If an animal like a sea turtle can’t even tell the difference between a plastic bag and food like jellyfish, can AI?
To safely clean the ecosystem, we need to properly distinguish plastic litter from the marine life that resembles it [2]. This project explores that challenge by using pretrained models for image classification on a jellyfish-vs-underwater-plastic dataset.
Images were gathered from Zenodo [4] and images.cv [5], and 1,960 images were used per class. On top of each pretrained base, the classifier head consists of a Dense layer with a ReLU activation function, a Batch Normalization layer, a Dropout layer, and a Dense layer with a sigmoid activation function. Each model was first run without fine-tuning.

import numpy as np
import matplotlib.pyplot as plt
from keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img
from keras.layers import Conv2D, Flatten, MaxPooling2D, Dense, Dropout
from tensorflow.keras import layers, models, optimizers
from keras.models import Sequential
import glob, os, random
from keras.applications.vgg16 import VGG16
from keras.applications.xception import Xception
from tensorflow.keras.applications.resnet50 import ResNet50
# access data
base_path = 'plastic_vs_jellyfish/dataset'
img_list = glob.glob(os.path.join(base_path, '*/*.jpg'))
print(len(img_list))
3920
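Before training, it is worth confirming that the two classes are balanced. A small helper (hypothetical, assuming the `dataset/<class_name>/*.jpg` layout used above) counts images per class directory; for a balanced dataset of 3,920 images, each class should report 1,960.

```python
import os
from collections import Counter

def count_images_per_class(base_path, ext='.jpg'):
    """Count image files in each immediate subdirectory of base_path."""
    counts = Counter()
    for class_name in sorted(os.listdir(base_path)):
        class_dir = os.path.join(base_path, class_name)
        if os.path.isdir(class_dir):
            # count only files with the expected image extension
            counts[class_name] = sum(
                1 for f in os.listdir(class_dir) if f.lower().endswith(ext)
            )
    return counts
```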
# define plotting function
def plot_results(history):
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))

    # plot accuracy
    acc = history.history['accuracy']
    val_acc = history.history['val_accuracy']
    epochs = range(1, len(acc) + 1)
    ax1.plot(epochs, acc, 'ko', label='Training accuracy')
    ax1.plot(epochs, val_acc, 'k', label='Validation accuracy')
    ax1.set_title('Training and Validation accuracy')
    ax1.set_xlabel('Epochs')
    ax1.legend()

    # plot loss
    loss = history.history['loss']
    val_loss = history.history['val_loss']
    ax2.plot(epochs, loss, 'ro', label='Training loss')
    ax2.plot(epochs, val_loss, 'r', label='Validation loss')
    ax2.set_title('Training and Validation loss')
    ax2.set_xlabel('Epochs')
    ax2.legend()
# instantiate model
conv_base = VGG16(weights='imagenet',
                  include_top=False,
                  input_shape=(150, 150, 3))
conv_base.summary()
Model: "vgg16"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 150, 150, 3)]     0
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 150, 150, 64)      1792
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 150, 150, 64)      36928
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 75, 75, 64)        0
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 75, 75, 128)       73856
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 75, 75, 128)       147584
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 37, 37, 128)       0
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 37, 37, 256)       295168
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 37, 37, 256)       590080
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 37, 37, 256)       590080
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 18, 18, 256)       0
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 18, 18, 512)       1180160
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 18, 18, 512)       2359808
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 18, 18, 512)       2359808
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 9, 9, 512)         0
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 9, 9, 512)         2359808
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 9, 9, 512)         2359808
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 9, 9, 512)         2359808
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 4, 4, 512)         0
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
_________________________________________________________________
# add as base model
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.BatchNormalization())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
vgg16 (Functional)           (None, 4, 4, 512)         14714688
_________________________________________________________________
flatten (Flatten)            (None, 8192)              0
_________________________________________________________________
dense (Dense)                (None, 256)               2097408
_________________________________________________________________
batch_normalization (BatchNo (None, 256)               1024
_________________________________________________________________
dropout (Dropout)            (None, 256)               0
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 257
=================================================================
Total params: 16,813,377
Trainable params: 16,812,865
Non-trainable params: 512
_________________________________________________________________
# freezing the base
print('This is the number of trainable weights before freezing the conv base:',
      len(model.trainable_weights))
conv_base.trainable = False
print('This is the number of trainable weights after freezing the conv base:',
      len(model.trainable_weights))
model.summary()
This is the number of trainable weights before freezing the conv base: 32
This is the number of trainable weights after freezing the conv base: 6
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
vgg16 (Functional)           (None, 4, 4, 512)         14714688
_________________________________________________________________
flatten (Flatten)            (None, 8192)              0
_________________________________________________________________
dense (Dense)                (None, 256)               2097408
_________________________________________________________________
batch_normalization (BatchNo (None, 256)               1024
_________________________________________________________________
dropout (Dropout)            (None, 256)               0
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 257
=================================================================
Total params: 16,813,377
Trainable params: 2,098,177
Non-trainable params: 14,715,200
_________________________________________________________________
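The counts of 32 and 6 can be reasoned out by hand: each of VGG16's 13 convolutional layers contributes a kernel and a bias tensor (26 in total), and the custom head contributes 6 trainable tensors (kernel and bias for each of the two Dense layers, plus gamma and beta for Batch Normalization; BN's moving mean and variance are never trainable, which is also where the 512 non-trainable params above come from). A quick sanity check of that arithmetic:

```python
# trainable weight tensors in the VGG16 base: 13 conv layers x (kernel + bias)
vgg16_conv_layers = 13
base_weights = vgg16_conv_layers * 2   # 26 tensors, frozen later
# head: dense kernel/bias + BN gamma/beta + dense_1 kernel/bias
head_weights = 2 + 2 + 2               # 6 tensors, always trainable

print(base_weights + head_weights)     # before freezing: 32
print(head_weights)                    # after freezing: 6
```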
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=45,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest',
    validation_split=0.1
)
train_generator = train_datagen.flow_from_directory(
    base_path,
    target_size=(150, 150),
    batch_size=75,
    class_mode='binary',
    subset='training',
    seed=0
)
# separate generator for validation: rescaling only, no augmentation
test_datagen = ImageDataGenerator(rescale=1./255,
                                  validation_split=0.1)
validation_generator = test_datagen.flow_from_directory(
    base_path,
    target_size=(150, 150),
    batch_size=75,
    class_mode='binary',
    subset='validation',
    seed=0
)
Found 3528 images belonging to 2 classes.
Found 392 images belonging to 2 classes.
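The generator counts can be sanity-checked: `validation_split=0.1` holds out 10% of the 3,920 images. A minimal sketch of that arithmetic (a hypothetical helper mirroring a fraction-based hold-out, not Keras's own implementation):

```python
def split_counts(n_images, val_fraction):
    """Return (train, validation) sample counts for a fractional hold-out."""
    n_val = round(n_images * val_fraction)
    return n_images - n_val, n_val

print(split_counts(3920, 0.1))  # (3528, 392), matching the generator output
```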
# run model
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
history = model.fit(
    train_generator,
    steps_per_epoch=len(train_generator),
    epochs=25,
    validation_data=validation_generator,
    validation_steps=len(validation_generator))
Epoch 1/25
48/48 [==============================] - 71s 1s/step - loss: 0.3294 - accuracy: 0.8676 - val_loss: 0.6670 - val_accuracy: 0.7857
Epoch 2/25
48/48 [==============================] - 33s 693ms/step - loss: 0.1984 - accuracy: 0.9240 - val_loss: 0.5415 - val_accuracy: 0.7959
Epoch 3/25
48/48 [==============================] - 32s 660ms/step - loss: 0.1929 - accuracy: 0.9252 - val_loss: 0.6235 - val_accuracy: 0.8444
Epoch 4/25
48/48 [==============================] - 33s 689ms/step - loss: 0.1884 - accuracy: 0.9283 - val_loss: 0.5338 - val_accuracy: 0.8495
Epoch 5/25
48/48 [==============================] - 32s 661ms/step - loss: 0.1542 - accuracy: 0.9425 - val_loss: 0.5952 - val_accuracy: 0.8571
Epoch 6/25
48/48 [==============================] - 33s 685ms/step - loss: 0.1479 - accuracy: 0.9425 - val_loss: 0.9045 - val_accuracy: 0.7653
Epoch 7/25
48/48 [==============================] - 32s 657ms/step - loss: 0.1506 - accuracy: 0.9459 - val_loss: 0.5883 - val_accuracy: 0.8495
Epoch 8/25
48/48 [==============================] - 32s 665ms/step - loss: 0.1524 - accuracy: 0.9439 - val_loss: 0.6774 - val_accuracy: 0.8189
Epoch 9/25
48/48 [==============================] - 32s 664ms/step - loss: 0.1720 - accuracy: 0.9317 - val_loss: 0.6599 - val_accuracy: 0.8571
Epoch 10/25
48/48 [==============================] - 32s 661ms/step - loss: 0.1467 - accuracy: 0.9470 - val_loss: 0.8174 - val_accuracy: 0.7985
Epoch 11/25
48/48 [==============================] - 32s 655ms/step - loss: 0.1570 - accuracy: 0.9459 - val_loss: 0.8999 - val_accuracy: 0.8087
Epoch 12/25
48/48 [==============================] - 32s 674ms/step - loss: 0.1404 - accuracy: 0.9490 - val_loss: 0.6572 - val_accuracy: 0.8240
Epoch 13/25
48/48 [==============================] - 32s 669ms/step - loss: 0.1336 - accuracy: 0.9495 - val_loss: 0.8865 - val_accuracy: 0.7985
Epoch 14/25
48/48 [==============================] - 33s 689ms/step - loss: 0.1302 - accuracy: 0.9566 - val_loss: 0.8224 - val_accuracy: 0.7985
Epoch 15/25
48/48 [==============================] - 34s 701ms/step - loss: 0.1324 - accuracy: 0.9512 - val_loss: 0.7097 - val_accuracy: 0.8189
Epoch 16/25
48/48 [==============================] - 33s 678ms/step - loss: 0.1257 - accuracy: 0.9498 - val_loss: 0.8340 - val_accuracy: 0.7806
Epoch 17/25
48/48 [==============================] - 32s 662ms/step - loss: 0.1262 - accuracy: 0.9563 - val_loss: 0.6937 - val_accuracy: 0.8240
Epoch 18/25
48/48 [==============================] - 32s 667ms/step - loss: 0.1274 - accuracy: 0.9512 - val_loss: 0.9128 - val_accuracy: 0.7908
Epoch 19/25
48/48 [==============================] - 31s 652ms/step - loss: 0.1208 - accuracy: 0.9580 - val_loss: 1.0490 - val_accuracy: 0.7679
Epoch 20/25
48/48 [==============================] - 33s 686ms/step - loss: 0.1261 - accuracy: 0.9538 - val_loss: 0.7992 - val_accuracy: 0.8112
Epoch 21/25
48/48 [==============================] - 33s 689ms/step - loss: 0.1098 - accuracy: 0.9617 - val_loss: 0.9263 - val_accuracy: 0.8036
Epoch 22/25
48/48 [==============================] - 32s 663ms/step - loss: 0.1287 - accuracy: 0.9521 - val_loss: 0.6525 - val_accuracy: 0.8163
Epoch 23/25
48/48 [==============================] - 32s 667ms/step - loss: 0.1275 - accuracy: 0.9566 - val_loss: 1.3060 - val_accuracy: 0.7245
Epoch 24/25
48/48 [==============================] - 32s 660ms/step - loss: 0.1303 - accuracy: 0.9558 - val_loss: 0.8203 - val_accuracy: 0.8163
Epoch 25/25
48/48 [==============================] - 32s 662ms/step - loss: 0.1238 - accuracy: 0.9532 - val_loss: 0.6923 - val_accuracy: 0.8265
# vgg16 no fine tuning
plot_results(history)
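Validation accuracy fluctuates considerably across the 25 epochs, so the final epoch is not necessarily the best checkpoint. A small helper (hypothetical, but working on the standard `history.history` dict of per-epoch lists) picks the epoch with the highest validation accuracy:

```python
def best_epoch(history_dict, metric='val_accuracy'):
    """Return (1-based epoch index, best value) from a Keras-style history dict."""
    values = history_dict[metric]
    # index of the maximum value; +1 converts to the 1-based epoch number
    idx = max(range(len(values)), key=values.__getitem__)
    return idx + 1, values[idx]

# e.g. with a toy history dict:
toy = {'val_accuracy': [0.79, 0.86, 0.85, 0.83]}
print(best_epoch(toy))  # (2, 0.86)
```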
# freezing all layers except top block
conv_base.trainable = True
set_trainable = False
for layer in conv_base.layers:
    if layer.name == 'block5_conv1':
        set_trainable = True
    if set_trainable:
        layer.trainable = True
    else:
        layer.trainable = False
# run model
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
history = model.fit_generator(
    train_generator,
    steps_per_epoch=len(train_generator),
    epochs=25,
    validation_data=validation_generator,
    validation_steps=len(validation_generator),
    max_queue_size=30,
)
plot_results(history)
/home/msds2022/mbaluyut/.conda/envs/msds2022-ml3/lib/python3.9/site-packages/keras/engine/training.py:1972: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators.
warnings.warn('`Model.fit_generator` is deprecated and '
Epoch 1/25
48/48 [==============================] - 34s 652ms/step - loss: 0.2559 - accuracy: 0.9065 - val_loss: 1.1203 - val_accuracy: 0.8214
Epoch 2/25
48/48 [==============================] - 31s 648ms/step - loss: 0.1592 - accuracy: 0.9402 - val_loss: 2.0404 - val_accuracy: 0.6709
Epoch 3/25
48/48 [==============================] - 31s 648ms/step - loss: 0.1587 - accuracy: 0.9410 - val_loss: 24.4221 - val_accuracy: 0.5000
Epoch 4/25
48/48 [==============================] - 32s 659ms/step - loss: 0.1597 - accuracy: 0.9416 - val_loss: 3.0259 - val_accuracy: 0.5332
Epoch 5/25
48/48 [==============================] - 32s 660ms/step - loss: 0.1108 - accuracy: 0.9569 - val_loss: 1.3446 - val_accuracy: 0.7092
Epoch 6/25
48/48 [==============================] - 32s 659ms/step - loss: 0.0831 - accuracy: 0.9714 - val_loss: 0.9220 - val_accuracy: 0.8367
Epoch 7/25
48/48 [==============================] - 32s 654ms/step - loss: 0.0953 - accuracy: 0.9660 - val_loss: 2.6343 - val_accuracy: 0.6454
Epoch 8/25
48/48 [==============================] - 33s 674ms/step - loss: 0.1160 - accuracy: 0.9595 - val_loss: 1.5999 - val_accuracy: 0.8010
Epoch 9/25
48/48 [==============================] - 33s 691ms/step - loss: 0.1422 - accuracy: 0.9470 - val_loss: 27.8408 - val_accuracy: 0.5000
Epoch 10/25
48/48 [==============================] - 33s 688ms/step - loss: 0.1292 - accuracy: 0.9561 - val_loss: 3.0466 - val_accuracy: 0.6301
Epoch 11/25
48/48 [==============================] - 32s 660ms/step - loss: 0.0751 - accuracy: 0.9768 - val_loss: 0.7763 - val_accuracy: 0.7985
Epoch 12/25
48/48 [==============================] - 32s 665ms/step - loss: 0.0728 - accuracy: 0.9745 - val_loss: 0.7535 - val_accuracy: 0.8214
Epoch 13/25
48/48 [==============================] - 32s 655ms/step - loss: 0.0746 - accuracy: 0.9739 - val_loss: 0.7784 - val_accuracy: 0.8265
Epoch 14/25
48/48 [==============================] - 32s 657ms/step - loss: 0.0605 - accuracy: 0.9793 - val_loss: 0.4076 - val_accuracy: 0.9311
Epoch 15/25
48/48 [==============================] - 32s 667ms/step - loss: 0.0532 - accuracy: 0.9821 - val_loss: 0.5630 - val_accuracy: 0.8903
Epoch 16/25
48/48 [==============================] - 33s 687ms/step - loss: 0.0381 - accuracy: 0.9855 - val_loss: 0.8500 - val_accuracy: 0.8112
Epoch 17/25
48/48 [==============================] - 33s 688ms/step - loss: 0.0413 - accuracy: 0.9850 - val_loss: 0.7298 - val_accuracy: 0.8878
Epoch 18/25
48/48 [==============================] - 34s 695ms/step - loss: 0.0380 - accuracy: 0.9895 - val_loss: 1.1657 - val_accuracy: 0.8138
Epoch 19/25
48/48 [==============================] - 32s 658ms/step - loss: 0.0667 - accuracy: 0.9773 - val_loss: 1.4021 - val_accuracy: 0.7832
Epoch 20/25
48/48 [==============================] - 33s 686ms/step - loss: 0.0493 - accuracy: 0.9816 - val_loss: 0.6409 - val_accuracy: 0.8903
Epoch 21/25
48/48 [==============================] - 32s 656ms/step - loss: 0.0350 - accuracy: 0.9867 - val_loss: 0.5349 - val_accuracy: 0.8878
Epoch 22/25
48/48 [==============================] - 33s 683ms/step - loss: 0.0446 - accuracy: 0.9850 - val_loss: 0.4038 - val_accuracy: 0.8980
Epoch 23/25
48/48 [==============================] - 32s 654ms/step - loss: 0.1560 - accuracy: 0.9464 - val_loss: 0.4706 - val_accuracy: 0.8929
Epoch 24/25
48/48 [==============================] - 32s 661ms/step - loss: 0.0606 - accuracy: 0.9787 - val_loss: 0.3753 - val_accuracy: 0.9031
Epoch 25/25
48/48 [==============================] - 32s 665ms/step - loss: 0.0446 - accuracy: 0.9844 - val_loss: 0.5606 - val_accuracy: 0.8750
# instantiate the model
res_model = ResNet50(include_top=False,
                     weights='imagenet',
                     input_shape=(150, 150, 3))
res_model.summary()
Model: "resnet50"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_4 (InputLayer) [(None, 150, 150, 3) 0
__________________________________________________________________________________________________
conv1_pad (ZeroPadding2D) (None, 156, 156, 3) 0 input_4[0][0]
__________________________________________________________________________________________________
conv1_conv (Conv2D) (None, 75, 75, 64) 9472 conv1_pad[0][0]
__________________________________________________________________________________________________
conv1_bn (BatchNormalization) (None, 75, 75, 64) 256 conv1_conv[0][0]
__________________________________________________________________________________________________
conv1_relu (Activation) (None, 75, 75, 64) 0 conv1_bn[0][0]
__________________________________________________________________________________________________
pool1_pad (ZeroPadding2D) (None, 77, 77, 64) 0 conv1_relu[0][0]
__________________________________________________________________________________________________
pool1_pool (MaxPooling2D) (None, 38, 38, 64) 0 pool1_pad[0][0]
__________________________________________________________________________________________________
conv2_block1_1_conv (Conv2D) (None, 38, 38, 64) 4160 pool1_pool[0][0]
__________________________________________________________________________________________________
conv2_block1_1_bn (BatchNormali (None, 38, 38, 64) 256 conv2_block1_1_conv[0][0]
__________________________________________________________________________________________________
conv2_block1_1_relu (Activation (None, 38, 38, 64) 0 conv2_block1_1_bn[0][0]
__________________________________________________________________________________________________
conv2_block1_2_conv (Conv2D) (None, 38, 38, 64) 36928 conv2_block1_1_relu[0][0]
__________________________________________________________________________________________________
conv2_block1_2_bn (BatchNormali (None, 38, 38, 64) 256 conv2_block1_2_conv[0][0]
__________________________________________________________________________________________________
conv2_block1_2_relu (Activation (None, 38, 38, 64) 0 conv2_block1_2_bn[0][0]
__________________________________________________________________________________________________
conv2_block1_0_conv (Conv2D) (None, 38, 38, 256) 16640 pool1_pool[0][0]
__________________________________________________________________________________________________
conv2_block1_3_conv (Conv2D) (None, 38, 38, 256) 16640 conv2_block1_2_relu[0][0]
__________________________________________________________________________________________________
conv2_block1_0_bn (BatchNormali (None, 38, 38, 256) 1024 conv2_block1_0_conv[0][0]
__________________________________________________________________________________________________
conv2_block1_3_bn (BatchNormali (None, 38, 38, 256) 1024 conv2_block1_3_conv[0][0]
__________________________________________________________________________________________________
conv2_block1_add (Add) (None, 38, 38, 256) 0 conv2_block1_0_bn[0][0]
conv2_block1_3_bn[0][0]
__________________________________________________________________________________________________
conv2_block1_out (Activation) (None, 38, 38, 256) 0 conv2_block1_add[0][0]
__________________________________________________________________________________________________
conv2_block2_1_conv (Conv2D) (None, 38, 38, 64) 16448 conv2_block1_out[0][0]
__________________________________________________________________________________________________
conv2_block2_1_bn (BatchNormali (None, 38, 38, 64) 256 conv2_block2_1_conv[0][0]
__________________________________________________________________________________________________
conv2_block2_1_relu (Activation (None, 38, 38, 64) 0 conv2_block2_1_bn[0][0]
__________________________________________________________________________________________________
conv2_block2_2_conv (Conv2D) (None, 38, 38, 64) 36928 conv2_block2_1_relu[0][0]
__________________________________________________________________________________________________
conv2_block2_2_bn (BatchNormali (None, 38, 38, 64) 256 conv2_block2_2_conv[0][0]
__________________________________________________________________________________________________
conv2_block2_2_relu (Activation (None, 38, 38, 64) 0 conv2_block2_2_bn[0][0]
__________________________________________________________________________________________________
conv2_block2_3_conv (Conv2D) (None, 38, 38, 256) 16640 conv2_block2_2_relu[0][0]
__________________________________________________________________________________________________
conv2_block2_3_bn (BatchNormali (None, 38, 38, 256) 1024 conv2_block2_3_conv[0][0]
__________________________________________________________________________________________________
conv2_block2_add (Add) (None, 38, 38, 256) 0 conv2_block1_out[0][0]
conv2_block2_3_bn[0][0]
__________________________________________________________________________________________________
conv2_block2_out (Activation) (None, 38, 38, 256) 0 conv2_block2_add[0][0]
__________________________________________________________________________________________________
conv2_block3_1_conv (Conv2D) (None, 38, 38, 64) 16448 conv2_block2_out[0][0]
__________________________________________________________________________________________________
conv2_block3_1_bn (BatchNormali (None, 38, 38, 64) 256 conv2_block3_1_conv[0][0]
__________________________________________________________________________________________________
conv2_block3_1_relu (Activation (None, 38, 38, 64) 0 conv2_block3_1_bn[0][0]
__________________________________________________________________________________________________
conv2_block3_2_conv (Conv2D) (None, 38, 38, 64) 36928 conv2_block3_1_relu[0][0]
__________________________________________________________________________________________________
conv2_block3_2_bn (BatchNormali (None, 38, 38, 64) 256 conv2_block3_2_conv[0][0]
__________________________________________________________________________________________________
conv2_block3_2_relu (Activation (None, 38, 38, 64) 0 conv2_block3_2_bn[0][0]
__________________________________________________________________________________________________
conv2_block3_3_conv (Conv2D) (None, 38, 38, 256) 16640 conv2_block3_2_relu[0][0]
__________________________________________________________________________________________________
conv2_block3_3_bn (BatchNormali (None, 38, 38, 256) 1024 conv2_block3_3_conv[0][0]
__________________________________________________________________________________________________
conv2_block3_add (Add) (None, 38, 38, 256) 0 conv2_block2_out[0][0]
conv2_block3_3_bn[0][0]
__________________________________________________________________________________________________
conv2_block3_out (Activation) (None, 38, 38, 256) 0 conv2_block3_add[0][0]
__________________________________________________________________________________________________
conv3_block1_1_conv (Conv2D) (None, 19, 19, 128) 32896 conv2_block3_out[0][0]
__________________________________________________________________________________________________
conv3_block1_1_bn (BatchNormali (None, 19, 19, 128) 512 conv3_block1_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block1_1_relu (Activation (None, 19, 19, 128) 0 conv3_block1_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block1_2_conv (Conv2D) (None, 19, 19, 128) 147584 conv3_block1_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block1_2_bn (BatchNormali (None, 19, 19, 128) 512 conv3_block1_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block1_2_relu (Activation (None, 19, 19, 128) 0 conv3_block1_2_bn[0][0]
__________________________________________________________________________________________________
conv3_block1_0_conv (Conv2D) (None, 19, 19, 512) 131584 conv2_block3_out[0][0]
__________________________________________________________________________________________________
conv3_block1_3_conv (Conv2D) (None, 19, 19, 512) 66048 conv3_block1_2_relu[0][0]
__________________________________________________________________________________________________
conv3_block1_0_bn (BatchNormali (None, 19, 19, 512) 2048 conv3_block1_0_conv[0][0]
__________________________________________________________________________________________________
conv3_block1_3_bn (BatchNormali (None, 19, 19, 512) 2048 conv3_block1_3_conv[0][0]
__________________________________________________________________________________________________
conv3_block1_add (Add) (None, 19, 19, 512) 0 conv3_block1_0_bn[0][0]
conv3_block1_3_bn[0][0]
__________________________________________________________________________________________________
conv3_block1_out (Activation) (None, 19, 19, 512) 0 conv3_block1_add[0][0]
__________________________________________________________________________________________________
conv3_block2_1_conv (Conv2D) (None, 19, 19, 128) 65664 conv3_block1_out[0][0]
__________________________________________________________________________________________________
conv3_block2_1_bn (BatchNormali (None, 19, 19, 128) 512 conv3_block2_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block2_1_relu (Activation (None, 19, 19, 128) 0 conv3_block2_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block2_2_conv (Conv2D) (None, 19, 19, 128) 147584 conv3_block2_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block2_2_bn (BatchNormali (None, 19, 19, 128) 512 conv3_block2_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block2_2_relu (Activation (None, 19, 19, 128) 0 conv3_block2_2_bn[0][0]
__________________________________________________________________________________________________
conv3_block2_3_conv (Conv2D) (None, 19, 19, 512) 66048 conv3_block2_2_relu[0][0]
__________________________________________________________________________________________________
conv3_block2_3_bn (BatchNormali (None, 19, 19, 512) 2048 conv3_block2_3_conv[0][0]
__________________________________________________________________________________________________
conv3_block2_add (Add) (None, 19, 19, 512) 0 conv3_block1_out[0][0]
conv3_block2_3_bn[0][0]
__________________________________________________________________________________________________
conv3_block2_out (Activation) (None, 19, 19, 512) 0 conv3_block2_add[0][0]
__________________________________________________________________________________________________
conv3_block3_1_conv (Conv2D) (None, 19, 19, 128) 65664 conv3_block2_out[0][0]
__________________________________________________________________________________________________
conv3_block3_1_bn (BatchNormali (None, 19, 19, 128) 512 conv3_block3_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block3_1_relu (Activation (None, 19, 19, 128) 0 conv3_block3_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block3_2_conv (Conv2D) (None, 19, 19, 128) 147584 conv3_block3_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block3_2_bn (BatchNormali (None, 19, 19, 128) 512 conv3_block3_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block3_2_relu (Activation (None, 19, 19, 128) 0 conv3_block3_2_bn[0][0]
__________________________________________________________________________________________________
conv3_block3_3_conv (Conv2D) (None, 19, 19, 512) 66048 conv3_block3_2_relu[0][0]
__________________________________________________________________________________________________
conv3_block3_3_bn (BatchNormali (None, 19, 19, 512) 2048 conv3_block3_3_conv[0][0]
__________________________________________________________________________________________________
conv3_block3_add (Add) (None, 19, 19, 512) 0 conv3_block2_out[0][0]
conv3_block3_3_bn[0][0]
__________________________________________________________________________________________________
conv3_block3_out (Activation) (None, 19, 19, 512) 0 conv3_block3_add[0][0]
__________________________________________________________________________________________________
conv3_block4_1_conv (Conv2D) (None, 19, 19, 128) 65664 conv3_block3_out[0][0]
__________________________________________________________________________________________________
conv3_block4_1_bn (BatchNormali (None, 19, 19, 128) 512 conv3_block4_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block4_1_relu (Activation (None, 19, 19, 128) 0 conv3_block4_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block4_2_conv (Conv2D) (None, 19, 19, 128) 147584 conv3_block4_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block4_2_bn (BatchNormali (None, 19, 19, 128) 512 conv3_block4_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block4_2_relu (Activation (None, 19, 19, 128) 0 conv3_block4_2_bn[0][0]
__________________________________________________________________________________________________
conv3_block4_3_conv (Conv2D) (None, 19, 19, 512) 66048 conv3_block4_2_relu[0][0]
__________________________________________________________________________________________________
conv3_block4_3_bn (BatchNormali (None, 19, 19, 512) 2048 conv3_block4_3_conv[0][0]
__________________________________________________________________________________________________
conv3_block4_add (Add) (None, 19, 19, 512) 0 conv3_block3_out[0][0]
conv3_block4_3_bn[0][0]
__________________________________________________________________________________________________
conv3_block4_out (Activation) (None, 19, 19, 512) 0 conv3_block4_add[0][0]
__________________________________________________________________________________________________
conv4_block1_1_conv (Conv2D) (None, 10, 10, 256) 131328 conv3_block4_out[0][0]
__________________________________________________________________________________________________
conv4_block1_1_bn (BatchNormali (None, 10, 10, 256) 1024 conv4_block1_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block1_1_relu (Activation (None, 10, 10, 256) 0 conv4_block1_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block1_2_conv (Conv2D) (None, 10, 10, 256) 590080 conv4_block1_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block1_2_bn (BatchNormali (None, 10, 10, 256) 1024 conv4_block1_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block1_2_relu (Activation (None, 10, 10, 256) 0 conv4_block1_2_bn[0][0]
__________________________________________________________________________________________________
conv4_block1_0_conv (Conv2D) (None, 10, 10, 1024) 525312 conv3_block4_out[0][0]
__________________________________________________________________________________________________
conv4_block1_3_conv (Conv2D) (None, 10, 10, 1024) 263168 conv4_block1_2_relu[0][0]
__________________________________________________________________________________________________
conv4_block1_0_bn (BatchNormali (None, 10, 10, 1024) 4096 conv4_block1_0_conv[0][0]
__________________________________________________________________________________________________
conv4_block1_3_bn (BatchNormali (None, 10, 10, 1024) 4096 conv4_block1_3_conv[0][0]
__________________________________________________________________________________________________
conv4_block1_add (Add) (None, 10, 10, 1024) 0 conv4_block1_0_bn[0][0]
conv4_block1_3_bn[0][0]
__________________________________________________________________________________________________
conv4_block1_out (Activation) (None, 10, 10, 1024) 0 conv4_block1_add[0][0]
__________________________________________________________________________________________________
conv4_block2_1_conv (Conv2D) (None, 10, 10, 256) 262400 conv4_block1_out[0][0]
__________________________________________________________________________________________________
conv4_block2_1_bn (BatchNormali (None, 10, 10, 256) 1024 conv4_block2_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block2_1_relu (Activation (None, 10, 10, 256) 0 conv4_block2_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block2_2_conv (Conv2D) (None, 10, 10, 256) 590080 conv4_block2_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block2_2_bn (BatchNormali (None, 10, 10, 256) 1024 conv4_block2_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block2_2_relu (Activation (None, 10, 10, 256) 0 conv4_block2_2_bn[0][0]
__________________________________________________________________________________________________
conv4_block2_3_conv (Conv2D) (None, 10, 10, 1024) 263168 conv4_block2_2_relu[0][0]
__________________________________________________________________________________________________
conv4_block2_3_bn (BatchNormali (None, 10, 10, 1024) 4096 conv4_block2_3_conv[0][0]
__________________________________________________________________________________________________
conv4_block2_add (Add) (None, 10, 10, 1024) 0 conv4_block1_out[0][0]
conv4_block2_3_bn[0][0]
__________________________________________________________________________________________________
conv4_block2_out (Activation) (None, 10, 10, 1024) 0 conv4_block2_add[0][0]
__________________________________________________________________________________________________
conv4_block3_1_conv (Conv2D) (None, 10, 10, 256) 262400 conv4_block2_out[0][0]
__________________________________________________________________________________________________
conv4_block3_1_bn (BatchNormali (None, 10, 10, 256) 1024 conv4_block3_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block3_1_relu (Activation (None, 10, 10, 256) 0 conv4_block3_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block3_2_conv (Conv2D) (None, 10, 10, 256) 590080 conv4_block3_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block3_2_bn (BatchNormali (None, 10, 10, 256) 1024 conv4_block3_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block3_2_relu (Activation (None, 10, 10, 256) 0 conv4_block3_2_bn[0][0]
__________________________________________________________________________________________________
conv4_block3_3_conv (Conv2D) (None, 10, 10, 1024) 263168 conv4_block3_2_relu[0][0]
__________________________________________________________________________________________________
conv4_block3_3_bn (BatchNormali (None, 10, 10, 1024) 4096 conv4_block3_3_conv[0][0]
__________________________________________________________________________________________________
conv4_block3_add (Add) (None, 10, 10, 1024) 0 conv4_block2_out[0][0]
conv4_block3_3_bn[0][0]
__________________________________________________________________________________________________
conv4_block3_out (Activation) (None, 10, 10, 1024) 0 conv4_block3_add[0][0]
__________________________________________________________________________________________________
conv4_block4_1_conv (Conv2D) (None, 10, 10, 256) 262400 conv4_block3_out[0][0]
__________________________________________________________________________________________________
conv4_block4_1_bn (BatchNormali (None, 10, 10, 256) 1024 conv4_block4_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block4_1_relu (Activation (None, 10, 10, 256) 0 conv4_block4_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block4_2_conv (Conv2D) (None, 10, 10, 256) 590080 conv4_block4_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block4_2_bn (BatchNormali (None, 10, 10, 256) 1024 conv4_block4_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block4_2_relu (Activation (None, 10, 10, 256) 0 conv4_block4_2_bn[0][0]
__________________________________________________________________________________________________
conv4_block4_3_conv (Conv2D) (None, 10, 10, 1024) 263168 conv4_block4_2_relu[0][0]
__________________________________________________________________________________________________
conv4_block4_3_bn (BatchNormali (None, 10, 10, 1024) 4096 conv4_block4_3_conv[0][0]
__________________________________________________________________________________________________
conv4_block4_add (Add) (None, 10, 10, 1024) 0 conv4_block3_out[0][0]
conv4_block4_3_bn[0][0]
__________________________________________________________________________________________________
conv4_block4_out (Activation) (None, 10, 10, 1024) 0 conv4_block4_add[0][0]
__________________________________________________________________________________________________
conv4_block5_1_conv (Conv2D) (None, 10, 10, 256) 262400 conv4_block4_out[0][0]
__________________________________________________________________________________________________
conv4_block5_1_bn (BatchNormali (None, 10, 10, 256) 1024 conv4_block5_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block5_1_relu (Activation (None, 10, 10, 256) 0 conv4_block5_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block5_2_conv (Conv2D) (None, 10, 10, 256) 590080 conv4_block5_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block5_2_bn (BatchNormali (None, 10, 10, 256) 1024 conv4_block5_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block5_2_relu (Activation (None, 10, 10, 256) 0 conv4_block5_2_bn[0][0]
__________________________________________________________________________________________________
conv4_block5_3_conv (Conv2D) (None, 10, 10, 1024) 263168 conv4_block5_2_relu[0][0]
__________________________________________________________________________________________________
conv4_block5_3_bn (BatchNormali (None, 10, 10, 1024) 4096 conv4_block5_3_conv[0][0]
__________________________________________________________________________________________________
conv4_block5_add (Add) (None, 10, 10, 1024) 0 conv4_block4_out[0][0]
conv4_block5_3_bn[0][0]
__________________________________________________________________________________________________
conv4_block5_out (Activation) (None, 10, 10, 1024) 0 conv4_block5_add[0][0]
__________________________________________________________________________________________________
conv4_block6_1_conv (Conv2D) (None, 10, 10, 256) 262400 conv4_block5_out[0][0]
__________________________________________________________________________________________________
conv4_block6_1_bn (BatchNormali (None, 10, 10, 256) 1024 conv4_block6_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block6_1_relu (Activation (None, 10, 10, 256) 0 conv4_block6_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block6_2_conv (Conv2D) (None, 10, 10, 256) 590080 conv4_block6_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block6_2_bn (BatchNormali (None, 10, 10, 256) 1024 conv4_block6_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block6_2_relu (Activation (None, 10, 10, 256) 0 conv4_block6_2_bn[0][0]
__________________________________________________________________________________________________
conv4_block6_3_conv (Conv2D) (None, 10, 10, 1024) 263168 conv4_block6_2_relu[0][0]
__________________________________________________________________________________________________
conv4_block6_3_bn (BatchNormali (None, 10, 10, 1024) 4096 conv4_block6_3_conv[0][0]
__________________________________________________________________________________________________
conv4_block6_add (Add) (None, 10, 10, 1024) 0 conv4_block5_out[0][0]
conv4_block6_3_bn[0][0]
__________________________________________________________________________________________________
conv4_block6_out (Activation) (None, 10, 10, 1024) 0 conv4_block6_add[0][0]
__________________________________________________________________________________________________
conv5_block1_1_conv (Conv2D) (None, 5, 5, 512) 524800 conv4_block6_out[0][0]
__________________________________________________________________________________________________
conv5_block1_1_bn (BatchNormali (None, 5, 5, 512) 2048 conv5_block1_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block1_1_relu (Activation (None, 5, 5, 512) 0 conv5_block1_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block1_2_conv (Conv2D) (None, 5, 5, 512) 2359808 conv5_block1_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block1_2_bn (BatchNormali (None, 5, 5, 512) 2048 conv5_block1_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block1_2_relu (Activation (None, 5, 5, 512) 0 conv5_block1_2_bn[0][0]
__________________________________________________________________________________________________
conv5_block1_0_conv (Conv2D) (None, 5, 5, 2048) 2099200 conv4_block6_out[0][0]
__________________________________________________________________________________________________
conv5_block1_3_conv (Conv2D) (None, 5, 5, 2048) 1050624 conv5_block1_2_relu[0][0]
__________________________________________________________________________________________________
conv5_block1_0_bn (BatchNormali (None, 5, 5, 2048) 8192 conv5_block1_0_conv[0][0]
__________________________________________________________________________________________________
conv5_block1_3_bn (BatchNormali (None, 5, 5, 2048) 8192 conv5_block1_3_conv[0][0]
__________________________________________________________________________________________________
conv5_block1_add (Add) (None, 5, 5, 2048) 0 conv5_block1_0_bn[0][0]
conv5_block1_3_bn[0][0]
__________________________________________________________________________________________________
conv5_block1_out (Activation) (None, 5, 5, 2048) 0 conv5_block1_add[0][0]
__________________________________________________________________________________________________
conv5_block2_1_conv (Conv2D) (None, 5, 5, 512) 1049088 conv5_block1_out[0][0]
__________________________________________________________________________________________________
conv5_block2_1_bn (BatchNormali (None, 5, 5, 512) 2048 conv5_block2_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block2_1_relu (Activation (None, 5, 5, 512) 0 conv5_block2_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block2_2_conv (Conv2D) (None, 5, 5, 512) 2359808 conv5_block2_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block2_2_bn (BatchNormali (None, 5, 5, 512) 2048 conv5_block2_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block2_2_relu (Activation (None, 5, 5, 512) 0 conv5_block2_2_bn[0][0]
__________________________________________________________________________________________________
conv5_block2_3_conv (Conv2D) (None, 5, 5, 2048) 1050624 conv5_block2_2_relu[0][0]
__________________________________________________________________________________________________
conv5_block2_3_bn (BatchNormali (None, 5, 5, 2048) 8192 conv5_block2_3_conv[0][0]
__________________________________________________________________________________________________
conv5_block2_add (Add) (None, 5, 5, 2048) 0 conv5_block1_out[0][0]
conv5_block2_3_bn[0][0]
__________________________________________________________________________________________________
conv5_block2_out (Activation) (None, 5, 5, 2048) 0 conv5_block2_add[0][0]
__________________________________________________________________________________________________
conv5_block3_1_conv (Conv2D) (None, 5, 5, 512) 1049088 conv5_block2_out[0][0]
__________________________________________________________________________________________________
conv5_block3_1_bn (BatchNormali (None, 5, 5, 512) 2048 conv5_block3_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block3_1_relu (Activation (None, 5, 5, 512) 0 conv5_block3_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block3_2_conv (Conv2D) (None, 5, 5, 512) 2359808 conv5_block3_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block3_2_bn (BatchNormali (None, 5, 5, 512) 2048 conv5_block3_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block3_2_relu (Activation (None, 5, 5, 512) 0 conv5_block3_2_bn[0][0]
__________________________________________________________________________________________________
conv5_block3_3_conv (Conv2D) (None, 5, 5, 2048) 1050624 conv5_block3_2_relu[0][0]
__________________________________________________________________________________________________
conv5_block3_3_bn (BatchNormali (None, 5, 5, 2048) 8192 conv5_block3_3_conv[0][0]
__________________________________________________________________________________________________
conv5_block3_add (Add) (None, 5, 5, 2048) 0 conv5_block2_out[0][0]
conv5_block3_3_bn[0][0]
__________________________________________________________________________________________________
conv5_block3_out (Activation) (None, 5, 5, 2048) 0 conv5_block3_add[0][0]
==================================================================================================
Total params: 23,587,712
Trainable params: 23,534,592
Non-trainable params: 53,120
__________________________________________________________________________________________________
# build a classifier head on top of the ResNet50 base
model = models.Sequential()
model.add(res_model)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.BatchNormalization())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
Model: "sequential_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
resnet50 (Functional)        (None, 5, 5, 2048)        23587712
_________________________________________________________________
flatten_3 (Flatten)          (None, 51200)             0
_________________________________________________________________
dense_6 (Dense)              (None, 256)               13107456
_________________________________________________________________
batch_normalization_3 (Batch (None, 256)               1024
_________________________________________________________________
dropout_3 (Dropout)          (None, 256)               0
_________________________________________________________________
dense_7 (Dense)              (None, 1)                 257
=================================================================
Total params: 36,696,449
Trainable params: 36,642,817
Non-trainable params: 53,632
_________________________________________________________________
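The head's parameter counts in the summary above can be sanity-checked by hand: Flatten turns the 5×5×2048 base output into a 51,200-long vector, Dense(256) then needs 51,200×256 weights plus 256 biases, BatchNormalization stores four vectors of size 256, and the final Dense(1) adds 256 weights and one bias. A quick sketch of the arithmetic:

```python
# Sanity-check the parameter counts reported by model.summary() above.
flat = 5 * 5 * 2048             # Flatten output size -> 51200
dense_256 = flat * 256 + 256    # Dense(256): kernel + bias -> 13107456
bn = 4 * 256                    # BatchNorm: gamma, beta, moving mean, moving var -> 1024
dense_1 = 256 * 1 + 1           # Dense(1): kernel + bias -> 257
head = dense_256 + bn + dense_1
total = 23_587_712 + head       # ResNet50 base params + head params

print(flat)   # 51200
print(total)  # 36696449, matching the summary
```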
# freezing the base
print('This is the number of trainable weights before freezing the conv base:',
len(model.trainable_weights))
res_model.trainable = False
print('This is the number of trainable weights after freezing the conv base:',
len(model.trainable_weights))
model.summary()
This is the number of trainable weights before freezing the conv base: 218
This is the number of trainable weights after freezing the conv base: 6
Model: "sequential_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
resnet50 (Functional)        (None, 5, 5, 2048)        23587712
_________________________________________________________________
flatten_3 (Flatten)          (None, 51200)             0
_________________________________________________________________
dense_6 (Dense)              (None, 256)               13107456
_________________________________________________________________
batch_normalization_3 (Batch (None, 256)               1024
_________________________________________________________________
dropout_3 (Dropout)          (None, 256)               0
_________________________________________________________________
dense_7 (Dense)              (None, 1)                 257
=================================================================
Total params: 36,696,449
Trainable params: 13,108,225
Non-trainable params: 23,588,224
_________________________________________________________________
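The drop from 218 to 6 trainable weight *tensors* is worth unpacking: after freezing the base, only the head's variables remain, i.e. a kernel and bias for each of the two Dense layers plus gamma and beta for the BatchNormalization layer (its moving mean and variance are non-trainable statistics). A rough count, assuming the head built above:

```python
# Trainable weight tensors (not individual parameters) left after freezing the base.
head_weights = [
    "dense_6/kernel", "dense_6/bias",                    # Dense(256)
    "batch_normalization_3/gamma",
    "batch_normalization_3/beta",                        # moving mean/var are non-trainable
    "dense_7/kernel", "dense_7/bias",                    # Dense(1)
]
print(len(head_weights))  # 6, matching the printout above
```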
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=45,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest',
    validation_split=0.1  # hold out 10% of the data for validation
)
train_generator = train_datagen.flow_from_directory(
base_path,
    target_size=(150, 150),  # resize all images to 150x150
batch_size=75,
class_mode='binary',
subset='training',
seed=0
)
test_datagen = ImageDataGenerator(rescale=1./255,
                                  validation_split=0.1)  # no augmentation for validation images
validation_generator = test_datagen.flow_from_directory(
base_path,
target_size=(150, 150),
batch_size=75,
class_mode='binary',
subset='validation',
seed=0
)
Found 3528 images belonging to 2 classes.
Found 392 images belonging to 2 classes.
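With this 90/10 split and a batch size of 75, each epoch covers 48 training batches and 6 validation batches (a Keras directory generator's length rounds up to cover the last partial batch), which matches the `48/48` progress bars in the training logs below:

```python
import math

n_train, n_val, batch = 3528, 392, 75
steps_per_epoch = math.ceil(n_train / batch)
validation_steps = math.ceil(n_val / batch)

print(steps_per_epoch)   # 48
print(validation_steps)  # 6
```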
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(
train_generator,
steps_per_epoch=len(train_generator),
epochs=25,
validation_data=validation_generator,
validation_steps=len(validation_generator))
Epoch 1/25
48/48 [==============================] - 38s 723ms/step - loss: 0.5765 - accuracy: 0.7333 - val_loss: 0.6534 - val_accuracy: 0.6709
Epoch 2/25
48/48 [==============================] - 33s 675ms/step - loss: 0.4660 - accuracy: 0.7948 - val_loss: 1.9498 - val_accuracy: 0.5000
Epoch 3/25
48/48 [==============================] - 32s 660ms/step - loss: 0.4317 - accuracy: 0.8138 - val_loss: 1.7509 - val_accuracy: 0.5000
Epoch 4/25
48/48 [==============================] - 32s 672ms/step - loss: 0.4305 - accuracy: 0.8152 - val_loss: 3.1809 - val_accuracy: 0.5000
Epoch 5/25
48/48 [==============================] - 33s 682ms/step - loss: 0.4096 - accuracy: 0.8291 - val_loss: 0.5909 - val_accuracy: 0.7015
Epoch 6/25
48/48 [==============================] - 32s 656ms/step - loss: 0.4170 - accuracy: 0.8282 - val_loss: 4.7010 - val_accuracy: 0.5000
Epoch 7/25
48/48 [==============================] - 32s 663ms/step - loss: 0.4040 - accuracy: 0.8220 - val_loss: 5.9272 - val_accuracy: 0.5000
Epoch 8/25
48/48 [==============================] - 32s 662ms/step - loss: 0.3908 - accuracy: 0.8296 - val_loss: 2.9176 - val_accuracy: 0.5000
Epoch 9/25
48/48 [==============================] - 32s 661ms/step - loss: 0.3987 - accuracy: 0.8282 - val_loss: 5.3365 - val_accuracy: 0.5000
Epoch 10/25
48/48 [==============================] - 32s 671ms/step - loss: 0.3916 - accuracy: 0.8313 - val_loss: 0.5786 - val_accuracy: 0.7755
Epoch 11/25
48/48 [==============================] - 32s 660ms/step - loss: 0.3884 - accuracy: 0.8325 - val_loss: 12.5934 - val_accuracy: 0.5000
Epoch 12/25
48/48 [==============================] - 34s 701ms/step - loss: 0.4049 - accuracy: 0.8356 - val_loss: 0.8404 - val_accuracy: 0.6735
Epoch 13/25
48/48 [==============================] - 32s 675ms/step - loss: 0.3676 - accuracy: 0.8410 - val_loss: 5.1917 - val_accuracy: 0.5000
Epoch 14/25
48/48 [==============================] - 32s 665ms/step - loss: 0.3684 - accuracy: 0.8452 - val_loss: 3.7019 - val_accuracy: 0.5026
Epoch 15/25
48/48 [==============================] - 33s 689ms/step - loss: 0.3886 - accuracy: 0.8322 - val_loss: 0.5537 - val_accuracy: 0.7423
Epoch 16/25
48/48 [==============================] - 32s 662ms/step - loss: 0.3656 - accuracy: 0.8452 - val_loss: 1.1292 - val_accuracy: 0.5867
Epoch 17/25
48/48 [==============================] - 32s 666ms/step - loss: 0.3664 - accuracy: 0.8464 - val_loss: 3.5835 - val_accuracy: 0.5000
Epoch 18/25
48/48 [==============================] - 33s 678ms/step - loss: 0.3733 - accuracy: 0.8407 - val_loss: 1.7781 - val_accuracy: 0.5077
Epoch 19/25
48/48 [==============================] - 32s 665ms/step - loss: 0.3563 - accuracy: 0.8484 - val_loss: 0.7638 - val_accuracy: 0.7679
Epoch 20/25
48/48 [==============================] - 33s 673ms/step - loss: 0.3809 - accuracy: 0.8433 - val_loss: 1.4565 - val_accuracy: 0.5587
Epoch 21/25
48/48 [==============================] - 32s 672ms/step - loss: 0.3592 - accuracy: 0.8506 - val_loss: 1.2217 - val_accuracy: 0.5663
Epoch 22/25
48/48 [==============================] - 32s 663ms/step - loss: 0.3656 - accuracy: 0.8518 - val_loss: 1.4596 - val_accuracy: 0.5714
Epoch 23/25
48/48 [==============================] - 33s 691ms/step - loss: 0.3734 - accuracy: 0.8404 - val_loss: 0.8530 - val_accuracy: 0.6276
Epoch 24/25
48/48 [==============================] - 32s 659ms/step - loss: 0.3791 - accuracy: 0.8469 - val_loss: 0.7320 - val_accuracy: 0.7321
Epoch 25/25
48/48 [==============================] - 33s 682ms/step - loss: 0.3716 - accuracy: 0.8467 - val_loss: 1.9044 - val_accuracy: 0.5000
plot_results(history)
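`plot_results` is a small helper defined earlier in the notebook; for reference, a minimal version might look like the sketch below, which plots the accuracy and loss curves stored in `history.history`. The exact styling here is an assumption, not the notebook's original.

```python
import matplotlib
matplotlib.use("Agg")  # safe for headless environments
import matplotlib.pyplot as plt

def plot_results(history):
    """Plot training vs. validation accuracy and loss from a Keras History."""
    h = history.history
    epochs = range(1, len(h["accuracy"]) + 1)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
    ax1.plot(epochs, h["accuracy"], label="train")
    ax1.plot(epochs, h["val_accuracy"], label="validation")
    ax1.set_title("Accuracy"); ax1.set_xlabel("epoch"); ax1.legend()
    ax2.plot(epochs, h["loss"], label="train")
    ax2.plot(epochs, h["val_loss"], label="validation")
    ax2.set_title("Loss"); ax2.set_xlabel("epoch"); ax2.legend()
    plt.show()
```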
# unfreeze the top of the base model (layers from index 165 onward)
for layer in res_model.layers[:165]:
layer.trainable = False
for layer in res_model.layers[165:]:
layer.trainable = True
# run model
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(  # fit_generator is deprecated; fit accepts generators directly
train_generator,
steps_per_epoch=len(train_generator),
epochs=25,
validation_data=validation_generator,
validation_steps=len(validation_generator),
max_queue_size=30,
)
Epoch 1/25
48/48 [==============================] - 37s 697ms/step - loss: 0.5147 - accuracy: 0.7724 - val_loss: 0.5768 - val_accuracy: 0.7602
Epoch 2/25
48/48 [==============================] - 32s 669ms/step - loss: 0.4287 - accuracy: 0.8163 - val_loss: 0.7581 - val_accuracy: 0.5230
Epoch 3/25
48/48 [==============================] - 32s 671ms/step - loss: 0.4147 - accuracy: 0.8228 - val_loss: 2.6256 - val_accuracy: 0.5000
Epoch 4/25
48/48 [==============================] - 32s 663ms/step - loss: 0.3929 - accuracy: 0.8345 - val_loss: 3.1373 - val_accuracy: 0.5000
Epoch 5/25
48/48 [==============================] - 33s 681ms/step - loss: 0.3947 - accuracy: 0.8348 - val_loss: 0.9283 - val_accuracy: 0.5281
Epoch 6/25
48/48 [==============================] - 32s 661ms/step - loss: 0.3734 - accuracy: 0.8486 - val_loss: 1.8103 - val_accuracy: 0.5000
Epoch 7/25
48/48 [==============================] - 34s 694ms/step - loss: 0.3712 - accuracy: 0.8407 - val_loss: 2.9228 - val_accuracy: 0.5000
Epoch 8/25
48/48 [==============================] - 32s 670ms/step - loss: 0.3653 - accuracy: 0.8532 - val_loss: 0.9672 - val_accuracy: 0.5587
Epoch 9/25
48/48 [==============================] - 32s 662ms/step - loss: 0.3530 - accuracy: 0.8478 - val_loss: 0.9126 - val_accuracy: 0.6071
Epoch 10/25
48/48 [==============================] - 32s 676ms/step - loss: 0.3615 - accuracy: 0.8489 - val_loss: 0.5546 - val_accuracy: 0.7602
Epoch 11/25
48/48 [==============================] - 32s 671ms/step - loss: 0.3349 - accuracy: 0.8563 - val_loss: 1.1317 - val_accuracy: 0.5663
Epoch 12/25
48/48 [==============================] - 33s 689ms/step - loss: 0.3362 - accuracy: 0.8642 - val_loss: 0.6861 - val_accuracy: 0.7704
Epoch 13/25
48/48 [==============================] - 32s 673ms/step - loss: 0.3336 - accuracy: 0.8625 - val_loss: 0.6960 - val_accuracy: 0.6760
Epoch 14/25
48/48 [==============================] - 32s 664ms/step - loss: 0.3335 - accuracy: 0.8591 - val_loss: 0.8257 - val_accuracy: 0.6735
Epoch 15/25
48/48 [==============================] - 32s 655ms/step - loss: 0.3428 - accuracy: 0.8563 - val_loss: 0.4859 - val_accuracy: 0.7883
Epoch 16/25
48/48 [==============================] - 32s 665ms/step - loss: 0.3299 - accuracy: 0.8671 - val_loss: 0.7161 - val_accuracy: 0.7270
Epoch 17/25
48/48 [==============================] - 32s 668ms/step - loss: 0.3294 - accuracy: 0.8614 - val_loss: 0.8434 - val_accuracy: 0.6837
Epoch 18/25
48/48 [==============================] - 32s 669ms/step - loss: 0.3341 - accuracy: 0.8637 - val_loss: 0.6486 - val_accuracy: 0.7551
Epoch 19/25
48/48 [==============================] - 33s 676ms/step - loss: 0.3100 - accuracy: 0.8790 - val_loss: 0.7397 - val_accuracy: 0.7066
Epoch 20/25
48/48 [==============================] - 32s 663ms/step - loss: 0.3186 - accuracy: 0.8699 - val_loss: 1.0260 - val_accuracy: 0.6122
Epoch 21/25
48/48 [==============================] - 32s 668ms/step - loss: 0.3134 - accuracy: 0.8693 - val_loss: 0.6084 - val_accuracy: 0.7806
Epoch 22/25
48/48 [==============================] - 32s 668ms/step - loss: 0.2976 - accuracy: 0.8773 - val_loss: 1.3096 - val_accuracy: 0.5765
Epoch 23/25
48/48 [==============================] - 33s 682ms/step - loss: 0.3060 - accuracy: 0.8756 - val_loss: 0.4629 - val_accuracy: 0.7806
Epoch 24/25
48/48 [==============================] - 31s 648ms/step - loss: 0.3040 - accuracy: 0.8733 - val_loss: 0.6863 - val_accuracy: 0.7628
Epoch 25/25
48/48 [==============================] - 32s 670ms/step - loss: 0.3274 - accuracy: 0.8705 - val_loss: 0.5196 - val_accuracy: 0.7679
plot_results(history)
# unfreeze more of the base model (layers from index 155 onward)
for layer in res_model.layers[:155]:
layer.trainable = False
for layer in res_model.layers[155:]:
layer.trainable = True
# run model
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(  # fit_generator is deprecated; fit accepts generators directly
train_generator,
steps_per_epoch=len(train_generator),
epochs=25,
validation_data=validation_generator,
validation_steps=len(validation_generator),
max_queue_size=30,
)
Epoch 1/25
48/48 [==============================] - 40s 749ms/step - loss: 0.5636 - accuracy: 0.7582 - val_loss: 0.8886 - val_accuracy: 0.5000
Epoch 2/25
48/48 [==============================] - 33s 688ms/step - loss: 0.4452 - accuracy: 0.8101 - val_loss: 1.1260 - val_accuracy: 0.5000
Epoch 3/25
48/48 [==============================] - 32s 683ms/step - loss: 0.4344 - accuracy: 0.8104 - val_loss: 2.0148 - val_accuracy: 0.5000
Epoch 4/25
48/48 [==============================] - 32s 667ms/step - loss: 0.3984 - accuracy: 0.8345 - val_loss: 2.2335 - val_accuracy: 0.5000
Epoch 5/25
48/48 [==============================] - 32s 660ms/step - loss: 0.3772 - accuracy: 0.8359 - val_loss: 2.3624 - val_accuracy: 0.5000
Epoch 6/25
48/48 [==============================] - 32s 660ms/step - loss: 0.3663 - accuracy: 0.8438 - val_loss: 2.0427 - val_accuracy: 0.5000
Epoch 7/25
48/48 [==============================] - 32s 657ms/step - loss: 0.3607 - accuracy: 0.8475 - val_loss: 2.1393 - val_accuracy: 0.5000
Epoch 8/25
48/48 [==============================] - 32s 665ms/step - loss: 0.3666 - accuracy: 0.8458 - val_loss: 2.3458 - val_accuracy: 0.5000
Epoch 9/25
48/48 [==============================] - 32s 664ms/step - loss: 0.3549 - accuracy: 0.8467 - val_loss: 1.8121 - val_accuracy: 0.4974
Epoch 10/25
48/48 [==============================] - 32s 669ms/step - loss: 0.3422 - accuracy: 0.8617 - val_loss: 1.5907 - val_accuracy: 0.5000
Epoch 11/25
48/48 [==============================] - 32s 661ms/step - loss: 0.3464 - accuracy: 0.8594 - val_loss: 1.3850 - val_accuracy: 0.5077
Epoch 12/25
48/48 [==============================] - 32s 662ms/step - loss: 0.3244 - accuracy: 0.8637 - val_loss: 1.3388 - val_accuracy: 0.5179
Epoch 13/25
48/48 [==============================] - 32s 660ms/step - loss: 0.3270 - accuracy: 0.8628 - val_loss: 0.8780 - val_accuracy: 0.6352
Epoch 14/25
48/48 [==============================] - 32s 669ms/step - loss: 0.3205 - accuracy: 0.8637 - val_loss: 0.6695 - val_accuracy: 0.7474
Epoch 15/25
48/48 [==============================] - 32s 658ms/step - loss: 0.3215 - accuracy: 0.8673 - val_loss: 0.7903 - val_accuracy: 0.7143
Epoch 16/25
48/48 [==============================] - 32s 660ms/step - loss: 0.3191 - accuracy: 0.8625 - val_loss: 0.5523 - val_accuracy: 0.7398
Epoch 17/25
48/48 [==============================] - 32s 660ms/step - loss: 0.3221 - accuracy: 0.8645 - val_loss: 0.7899 - val_accuracy: 0.7041
Epoch 18/25
48/48 [==============================] - 32s 656ms/step - loss: 0.3128 - accuracy: 0.8705 - val_loss: 0.7118 - val_accuracy: 0.7270
Epoch 19/25
48/48 [==============================] - 32s 672ms/step - loss: 0.3054 - accuracy: 0.8759 - val_loss: 0.5888 - val_accuracy: 0.7908
Epoch 20/25
48/48 [==============================] - 32s 658ms/step - loss: 0.3174 - accuracy: 0.8642 - val_loss: 1.4096 - val_accuracy: 0.5918
Epoch 21/25
48/48 [==============================] - 32s 664ms/step - loss: 0.3081 - accuracy: 0.8781 - val_loss: 0.6261 - val_accuracy: 0.7372
Epoch 22/25
48/48 [==============================] - 32s 665ms/step - loss: 0.3172 - accuracy: 0.8716 - val_loss: 1.1529 - val_accuracy: 0.6148
Epoch 23/25
48/48 [==============================] - 32s 655ms/step - loss: 0.3087 - accuracy: 0.8736 - val_loss: 0.9973 - val_accuracy: 0.6301
Epoch 24/25
48/48 [==============================] - 32s 670ms/step - loss: 0.2996 - accuracy: 0.8724 - val_loss: 0.7713 - val_accuracy: 0.7245
Epoch 25/25
48/48 [==============================] - 32s 656ms/step - loss: 0.2983 - accuracy: 0.8776 - val_loss: 0.5893 - val_accuracy: 0.7781
plot_results(history)
# inspect which layers are currently trainable
for i, layer in enumerate(res_model.layers):
    print(i, layer.name, layer.trainable)
[Output, abridged] ResNet50 has 175 layers (indices 0-174). Layers 0-154 (input_4 through conv5_block1_out) print trainable=False; layers 155-174 (conv5_block2_1_conv through conv5_block3_out) print trainable=True.
# unfreeze conv5 (layers 143 onward) for fine tuning
for layer in res_model.layers[:143]:
    layer.trainable = False
for layer in res_model.layers[143:]:
    layer.trainable = True
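Slicing by hard-coded indices works, but the indices depend on the exact layer ordering of a given Keras version. A hypothetical helper (not in the original notebook) can select layers by name prefix instead, which is less brittle; the Keras usage shown in the comment assumes the notebook's `res_model`:

```python
# Hypothetical helper (not from the original notebook): unfreeze layers by
# name prefix rather than by index, so the selection survives changes in
# layer ordering between Keras versions.
def unfreeze_blocks(model, prefixes):
    """Freeze every layer, then mark layers whose name starts with any of
    the given prefixes as trainable."""
    for layer in model.layers:
        layer.trainable = any(layer.name.startswith(p) for p in prefixes)

# Against the notebook's ResNet50 base, unfreezing the top two blocks would be:
#   unfreeze_blocks(res_model, ("conv5_block2", "conv5_block3"))
```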
# recompile and train the fine-tuned model
# (Model.fit accepts generators directly; fit_generator is deprecated)
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

history = model.fit(
    train_generator,
    steps_per_epoch=len(train_generator),
    epochs=25,
    validation_data=validation_generator,
    validation_steps=len(validation_generator),
    max_queue_size=30,
)
[Training log, abridged] Epoch 1/25: loss 0.5309, accuracy 0.7681, val_loss 3.0662, val_accuracy 0.5000. Validation accuracy stays at 0.5000 through epoch 6, then improves steadily; the best val_accuracy is 0.8061 at epoch 20. Epoch 25/25: loss 0.2965, accuracy 0.8764, val_loss 0.5223, val_accuracy 0.7730.
plot_results(history)
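The validation metrics above swing considerably from epoch to epoch (val_accuracy peaks at 0.8061 in epoch 20 but ends at 0.7730), so the final epoch is not necessarily the best one. One way to guard against this, sketched here with standard Keras callbacks (not part of the original notebook), is to stop early and keep the best weights:

```python
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Stop once val_loss has not improved for 5 epochs and roll back to the
# best weights seen so far, so a late bad epoch does not define the model.
callbacks = [
    EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True),
    ModelCheckpoint('best_resnet50.h5', monitor='val_accuracy',
                    save_best_only=True),
]
# Passed to training as:
#   history = model.fit(train_generator, epochs=25,
#                       validation_data=validation_generator,
#                       callbacks=callbacks)
```

`restore_best_weights=True` is what makes the rollback happen in memory; the checkpoint file additionally preserves the best model on disk.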
VGG16 without fine tuning beat 1.25 × PCC (the proportional chance criterion), which is 65% here: it reached roughly 85% validation accuracy or more. Fine tuning by unfreezing convolutional block 5 raised the accuracy to around 91%.
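For reference, one common definition of the proportional chance criterion is the sum of squared class proportions, i.e. the accuracy of a classifier that guesses randomly in proportion to the class frequencies. A quick sketch, assuming that definition and hypothetical class counts (the real counts come from the 1960-image dataset); note that for a perfectly balanced two-class split this works out to 1.25 × PCC = 62.5%, so a 65% threshold would reflect the actual proportions of the split used:

```python
def pcc(class_counts):
    """Proportional chance criterion: sum of squared class proportions
    (a chance-level accuracy baseline for classification)."""
    total = sum(class_counts)
    return sum((count / total) ** 2 for count in class_counts)

# Hypothetical perfectly balanced split of the 1960 images:
baseline = 1.25 * pcc([980, 980])  # 1.25 * 0.5 = 0.625
```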
For ResNet50, the model without fine tuning peaked at around 77% accuracy, lower than VGG16. Fine tuning was first done by unfreezing conv5_block3, the topmost block, which encodes the most task-specific features; unfreezing it lets us "repurpose" the network for the classification task at hand. As the plot shows, accuracy improved slightly, reaching around 78%. Unfreezing the top two blocks raised accuracy further, to around 79%, and unfreezing the top three blocks brought another small improvement, to around 80%.
[1] Valdenegro-Toro, M. (2016, December 1). Submerged marine debris detection with autonomous underwater vehicles. IEEE Xplore. https://doi.org/10.1109/RAHA.2016.7931907
[2] Moorton, Z., Kurt, Z., & Woo, W. (n.d.). Is the use of Deep Learning and Artificial Intelligence an appropriate means to locate debris in the ocean without harming aquatic wildlife? Retrieved March 11, 2022, from https://arxiv.org/pdf/2112.00190.pdf
[3] Tata, G. (2022, January 25). gautamtata/DeepPlastic. GitHub. https://github.com/gautamtata/DeepPlastic
[4] Ruiz-Frau, A., Hintz, H., & Jennings, C. (2019, November 11). Jellyfish dataset. Zenodo. https://zenodo.org/record/3545785#.Yh-gc-5BxJU
[5] images.cv | Image datasets for computer vision and machine learning. (n.d.). Images.cv. Retrieved March 11, 2022, from https://images.cv/dataset/jellyfish-image-classification-dataset